Optimizing Instruction Execution in the PowerPC 603e Superscalar Microprocessor

Top Changwatchai, Skipper Smith, Nasr Ullah
Motorola RISC Microprocessor Division, System Performance Modeling and Simulation
Document 1998-014-1, Version 1.0
Outline
- Superscalar microprocessor architectures
- The PowerPC 603e superscalar microprocessor
- Stall conditions
  - Dispatch and completion stalls
  - Execution unit stalls
  - Load and store stalls
  - Instruction interaction stalls
- Summary
This presentation discusses techniques for optimizing instruction execution in a superscalar microprocessor architecture such as the PowerPC 603e microprocessor. Instruction execution in a superscalar processor is enhanced by allowing the parallel execution of multiple instructions. To realize the full potential of most superscalar processors, one needs to be aware of their instruction flow and execution mechanisms. Optimal performance in a microprocessor is attained by ensuring a continuous flow of instructions through the instruction pipeline. Being aware of the dependencies and constraints of the instruction flow mechanisms allows one to generate code that most effectively takes advantage of all the capabilities of a superscalar processor such as the PowerPC 603e.

The 603e is a low-power implementation of the PowerPC family of reduced instruction set computer (RISC) microprocessors. The 603e is a superscalar processor capable of issuing and retiring as many as three instructions per clock. Instructions can execute out of order for increased performance, but they retire in order to ensure functional correctness and well-ordered behavior.

In this paper, we first discuss the instruction flow mechanism of the PowerPC 603e microprocessor and then describe dependencies and constraints that should be avoided to reduce stalls in the instruction pipeline and maximize performance. By closely examining the instruction flow mechanism of the 603e, a software developer will not only be able to optimize code for the 603e, but will also understand some of the general principles behind superscalar microprocessors that can impact performance.
Common Superscalar Characteristics
- Multiple instruction dispatch
  - Ability to fetch and dispatch more than one instruction at a time
- Multiple functional units
  - Ability to execute more than one instruction in parallel
  - Out-of-order execution
- Multiple instruction retirement
  - Completion of multiple instructions per clock
- Multiple read/write ports to the register file set
- Mechanisms to avoid false dependencies
There are many characteristics found in common among contemporary superscalar processors. One characteristic is the ability to execute multiple instructions in parallel. To enable this, superscalar processors contain multiple functional units, and they allow the fetching, issuing (dispatching), and retiring of multiple instructions in one clock cycle. Superscalar processors typically contain a register set with multiple read/write ports to allow multiple instructions in execution to access data simultaneously. Most superscalar processors also have separate floating-point and integer register sets.

The key focus of superscalar processors is to increase overall instruction throughput by keeping the instruction pipeline free of stalls. Mechanisms exist to avoid false dependencies between instructions; instructions should be allowed to dispatch and execute until they are forced to stall due to a change in instruction flow, a lack of resources, or true data dependencies. To allow the free flow of execution, most superscalar processors allow out-of-order execution of instructions. However, a mechanism must exist for bringing the instructions back into program order when they complete executing.

The primary reasons the instruction flow can stall within a superscalar processor are changes in instruction flow, resource constraints, and data dependencies. By understanding the flow mechanism, and being aware of the situations that can cause stalls, one can write code that avoids these situations and thereby executes faster.
Block Diagram of the 603e
[Block diagram: the sequential fetcher and six-entry instruction queue feed instruction dispatch over 64-bit buses; the branch processing unit (with CTR, CR, and LR), system register unit, integer unit (GPR file with 5 rename registers), load/store unit, and floating-point unit (FPR file with 4 rename registers) exchange instruction flow signalling with the completion unit.]
Like most other superscalar processors, the 603e features pipelined execution flow, in which the processing of an instruction is split into discrete stages. Each stage is able to handle a different instruction, allowing multiple instructions to be in execution at once. For example, it may take three cycles for a floating-point instruction to complete (three-cycle latency), but if there are no stalls in the floating-point pipeline, then a series of floating-point instructions can have a throughput of one instruction per cycle.

The 603e processor core consists of a fetcher, a dispatcher, and five execution units: an integer unit (IU), a floating-point unit (FPU), a branch processing unit (BPU), a load/store unit (LSU), and a system register unit (SRU). An instruction queue (IQ) holds up to six instructions that have been fetched in and are waiting for dispatch. A completion queue (CQ) holds up to five instructions that have dispatched and are waiting to be finished and retired.
Registers and Execution Units
- Execution units
  - Integer unit
  - System register unit
  - Floating-point unit
  - Branch unit
  - Load/store unit
- Registers
  - Integer registers
  - Floating-point registers
- Execution unit/register interaction
  - Rename registers
  - Data forwarding
Execution units
- The integer unit accepts all integer instructions.
- The system register unit accepts all synchronizing, condition register, and system register instructions. Since these instructions appear infrequently, the SRU also accepts basic add and compare instructions.
- The floating-point unit accepts all instructions that use the FP registers (other than loads and stores).
- The branch processing unit redirects instruction fetches, performs prediction, helps control speculative execution, and folds appropriate branches out of the pipeline to permit an effective branch cycle time of zero.
- The load/store unit accepts instructions that access the data cache and memory.

Registers
The 603e supports 32 general-purpose registers (GPRs) that are 32 bits wide and 32 floating-point registers (FPRs) that are 64 bits wide. These two register files are supported by rename registers, which allow quick forwarding of data in order to reduce stalls caused by data dependencies. There are five GPR rename registers and four FPR rename registers.

As an example of their use, suppose an integer divide instruction is computing the value of a given GPR, followed by a store instruction that must store this value to memory. When the divide is dispatched to the IU, the GPR is assigned a GPR rename register. The store is then dispatched to the LSU and must wait for the value to become valid. When the value is computed, the store immediately receives it over the GPR rename bus and can begin storing the value at the same time it is written back to the GPR file.
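A minimal assembly sketch of this forwarding example; the register choices are illustrative and not from the original presentation:

        divw    r5,r3,r4        ; multi-cycle integer divide; r5 is assigned a GPR rename register
        stw     r5,0(r6)        ; dispatched to the LSU and waits for r5 to become valid;
                                ;   the store picks the result off the GPR rename bus as soon as
                                ;   the divide finishes, in parallel with the write-back to the GPR file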
Instruction Pipeline
[Pipeline diagram: fetch, dispatch, execute, finish, retire. Fetch, dispatch, and retire occur in program order; execute and finish may occur out of order.]
The fetcher fetches up to two instructions per clock into the IQ, where they appear in the lowest available slots. The dispatcher then dispatches up to two instructions from the IQ to the execution units (excluding the BPU, which scans instructions as they are brought in by the fetcher). The dispatcher also performs source and destination dependency checking and determines dispatch serializations. Each instruction dispatched has an entry created for it in the completion queue. When an execution unit has finished processing an instruction, it signals the completion unit, and the instruction's entry in the completion queue is marked finished. Up to two finished instructions per clock may be retired (removed) from the completion queue. When an instruction is retired, the architectural registers are updated. Note that fetching, dispatching, and retiring of instructions are done in program order, but executing and finishing can be done out of order and in parallel.

In a pipelined architecture, anything that prevents an instruction from moving from one stage of a pipeline to the next is known as a stall. Resource checks must be performed to see if stalls will occur. The rest of this paper discusses how and where stalls can occur in the instruction pipeline.
Dispatch and Completion
[Block diagram of the 603e, repeated from above, highlighting the instruction queue (dispatcher) and the completion unit.]
The dispatcher and completion unit control the execution of instructions. Interactions between the dispatcher, the completion unit, and the various execution units can reduce potential stalls in the instruction pipeline. In the next few slides, we discuss these interactions.

The dispatcher is capable of buffering up to 6 instructions (in the instruction queue). However, instructions must dispatch in order out of the dispatcher and only from the bottom two slots (an exception to this rule occurs with branch folding, discussed later). If the instruction in the bottom slot is not capable of dispatching, then the instruction in the second slot cannot dispatch either. It is the job of the dispatcher to determine whether or not an execution unit is capable of accepting an instruction. The dispatcher will stall instruction dispatch when an instruction awaiting dispatch requires an execution unit that is unavailable, or will stall the second instruction if both instructions awaiting dispatch need the same execution resource.

The completion unit is capable of buffering up to 5 instructions (in the completion queue). The completion unit records the proper order of dispatch to enforce in-order completion. While instructions are being tracked, the completion unit also keeps a record of exceptions generated, speculation, out-of-order finishing, and so on. All instructions except folded branches must be tracked in the completion unit. The completion unit assigns rename registers (up to 5 integer and 4 floating-point) to the instructions as they dispatch. The completion unit will stall the dispatcher if no appropriate rename register resources are available. Additionally, if there are no slots available in the completion queue, the completion unit will order the dispatcher to stall the dispatching of instructions.
Branching

  Opcode / Mnemonic                       Addressing                          Range
  Branch always           b               relative                            +/- 32 MB
                          ba              absolute                            0 +/- 32 MB
  Branch conditional      bc              (condition field(s)) relative       +/- 32 KB
                          bca             (condition field(s)) absolute       0 +/- 32 KB
  Branch conditional      bcctr           (condition field(s)) count reg.     4 GB
    to count register
  Branch conditional      bclr            (condition field(s)) link reg.      4 GB
    to link register
The PowerPC architecture instruction set includes two types of branches, unconditional (branch always) and conditional. Conditional branches can depend on the contents of the condition register (CR), which can be set by compare and arithmetic instructions; on the contents of the count register (CTR), which is typically used when executing loops; or on both the CR and the CTR. Branch instructions can specify an absolute or relative target address, or they can branch to the address in the link register (LR) or CTR. The LR is typically used for subroutine calls, and the CTR (if specified as a destination address) is typically used for absolute jumps.
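The fragment below is not from the original presentation; it simply illustrates the branch forms in the table, using illustrative registers, labels, and simplified mnemonics:

        cmpwi   cr0,r3,0        ; compare sets CR0
        beq     cr0,done        ; conditional branch on CR0 (bc form), relative, +/- 32 KB
        b       done            ; unconditional branch (b form), relative, +/- 32 MB
        mtctr   r4
        bctrl                   ; branch to the address in the CTR (bcctr form), e.g. an absolute call
done:   blr                     ; branch to the address in the LR (bclr form), the usual subroutine return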
Branch Folding
[Diagram: the fetcher brings instructions from the instruction cache into the six-entry instruction queue and forwards them to the branch processing unit; the dispatcher sends instructions from the bottom of the queue to the execution units and creates entries in the five-entry completion queue.]
When the fetcher fetches instructions into the instruction queue (IQ), it also forwards them to the branch processing unit (BPU), which scans these instructions for branches. The BPU immediately begins address calculation for branches found and attempts to fold certain branches out of the instruction queue (discussed later). Because branch instructions can change the instruction flow, they can potentially cause stalls in the instruction pipeline when new instructions must be fetched from the target address. The 603e includes two mechanisms for reducing the impact of branch instructions: branch folding and branch prediction.
Branch Folding (example)
[Four snapshots of the six-entry instruction queue and five-entry completion queue showing instructions A-F: in the left two columns, branch C is folded out of the instruction queue; in the right two columns, branch A is dispatched before it can be folded. The sequence is described in the notes below.]
The branch processing unit (BPU) can fold certain branches out of the instruction queue. They are removed from the IQ before being dispatched, allowing the dispatcher to handle other instructions and freeing space in the instruction queue and completion queue. Frequently, instruction flow can continue as if the branch had not occurred. The BPU can fold all unconditional branches, as well as conditional branches that do not involve the CTR or LR. Conditional branches that do involve these registers cannot be folded, because the CTR and LR have corresponding rename registers which can only be tracked if branches using them are recorded in the completion queue by being dispatched.

Consider the left two columns of diagrams. We start with four instructions in the instruction queue. Instruction C is a branch. In the second column, we see that instructions A and B have been dispatched and have entries in the completion queue, and that instruction C has been folded out by the BPU. Instructions E and F have also been fetched in.

Because superscalar processors feature multiple units that are attempting to flow instructions through their pipelines as quickly as possible, race conditions between various resources can occasionally arise. One race condition occurs in the instruction queue: if the dispatcher can tag a branch for dispatch before the BPU can fold it out of the instruction queue, then the branch will not be folded; it will be dispatched and an entry created for it in the completion queue. This situation typically occurs if the IQ is empty or near-empty and the foldable branch is fetched directly into one of the bottom two slots (i.e., the slots from which instructions are dispatched). However, the performance impact of this race condition is negligible.

The right two columns illustrate the branch race condition. Instructions A and B have just been fetched into the instruction queue, with A being a branch. In this case, the dispatcher grabs A before it can be folded, and we see it in the completion queue in the next cycle.
Branch Code Stall Example

for...next with bdnz:

        li      r13,count
        mtspr   ctr,r13
loop:
        ; do some useful work
        bdnz    loop

for...next with subi./bgt:

        li      r13,count
loop:
        subi.   r13,r13,0x0001
        ; do some useful work
        bgt     loop
In this slide, we depict a potential stall that can occur with branches. The code fragments demonstrate how, in some cases, one can use foldable branches to attain better performance than non-foldable branches. The two loops repeat for count iterations. The first code fragment initializes the CTR and uses only one instruction to control the looping, bdnz. (bdnz is a simplified mnemonic for a conditional branch that decrements the CTR and branches if the CTR is not zero.) This branch cannot be folded and must be dispatched. Since branches that dispatch are required to retire from the last stage of the completion unit, any loop involving a branch that dispatches may need an extra clock (in addition to the loop body time) to complete execution.

It is possible to avoid the additional latency by using a foldable branch instead of the bdnz. The bgt and subi. instructions in the second code fragment can be used to obtain the same functionality as the bdnz. The subi. instruction is a single-cycle instruction that can retire paired with almost any other instruction; thus in most loops, subi. adds no time to the execution of that loop. The bgt is also capable of being folded out of the pipeline and not dispatching at all. Therefore, code that uses the subi./bgt combination will likely be a clock faster each time through the loop than the bdnz. However, the exact timing difference, if any, depends on the actual composition of the loop body.
Branch Prediction and Speculative Execution (example)
[Four snapshots of the instruction queue and completion queue showing instructions A-H: branch C is folded but left unresolved, subsequent instructions D, E, and F are marked speculative, and after the misprediction is detected the speculative instructions are flushed. The sequence is described in the notes below.]
Each conditional branch instruction includes a prediction bit, which is set by the compiler or an assembly-language programmer. This bit helps specify whether the branch is predicted to be taken or not taken. This is known as static prediction because the prediction behavior is encoded in the instruction. While the branch condition is waiting to be resolved, execution continues down the predicted path, and these subsequent instructions are marked as speculative instructions. (Speculative instructions are not allowed to change the programming model, such as by updating register files or memory, and may stall until the branch is resolved and they become non-speculative.) When the branch condition is resolved, if the prediction was correct, then the speculative instructions are marked non-speculative, and no penalty is assessed. If the prediction was incorrect, then the speculative instructions are flushed (removed from the instruction pipeline) and execution resumes along the correct execution path. The 603e has one level of prediction, meaning that a conditional branch encountered along a speculative path cannot itself be executed speculatively. Instead, it will stall in the pipeline until the previous branch is resolved.

In the leftmost diagram, we have instructions A, B, C, and D in the instruction queue. Instruction C is a branch. In the next diagram (next cycle), instruction A has been dispatched and C folded out by the BPU. However, assume that branch C cannot be resolved (perhaps it is dependent on the results of instruction A). All subsequent instructions are then marked speculative: D and the newly fetched instructions E and F. In the next diagram, we see that B and D have been dispatched to the CQ and G and H fetched into the IQ. In our example, branch C is now resolved and it turns out the branch was mispredicted. In the final diagram, the speculative instructions are flushed, and the fetcher is ready to fetch instructions from the correct instruction stream. If branch C had been correctly predicted, the speculative instructions would simply be marked non-speculative and no stall would occur.
Performance Impact of Branch Prediction
- Speculative execution allows instruction flow to proceed before branch conditions have been resolved
- Correct predictions incur no performance penalty
- Incorrect predictions incur significant performance penalties only when mispredicted paths result in instruction cache misses
- Incorrect predictions may be avoided by separating the instruction that sets a branch condition from the branch that uses it
Speculative execution allows the fetcher to fetch instructions without stalling while the branch is being resolved. Prediction does not cause any pipeline stalls unless the prediction turns out to be incorrect. If the prediction is incorrect, it is the function of the BPU to perform the necessary tasks to recover from speculation. Branch prediction of the type used by the 603e is correct approximately 86% of the time. Due to the 603e's ability to invert the normal prediction mechanism, a smart programmer or compiler can attain greater prediction accuracy. Mispredicted branches, which occur infrequently even using only the default speculation mechanism, incur significant performance penalties only when speculative execution also results in cache misses on the mispredicted path.

Since incorrect predictions can potentially cause many stalls, it is possible to improve performance by avoiding prediction in some code fragments. By separating the instruction that sets the branch condition from the branch that uses it, it is possible to prevent the processor from executing speculatively altogether. On the 603e, we can calculate the approximate separation distance using a worst-case analysis for a conditional branch dependent on the CR. Assuming that the processor dispatches 3 instructions per clock (2 instructions and an unconditional branch or nop), and assuming a worst-case condition register update time of 3 clocks, we calculate that separating the conditional branch from the condition register update instruction by 9 instructions will avoid speculative execution. For most code fragments, the 603e can dispatch instructions at a peak rate of 2 instructions per clock. Additionally, most instructions (such as the compare instruction) take only 1 clock to update the condition register. Under these conditions, one can prevent speculative execution by separating the conditional branch from the condition register update instruction by only 3 instructions.
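A sketch of this scheduling idea under the best-case assumptions above; the filler instructions and register numbers are illustrative and not from the original presentation:

        cmpwi   cr0,r3,0        ; sets the branch condition (1 clock for a compare)
        addi    r5,r5,4         ; three independent instructions scheduled between the
        lwz     r6,0(r7)        ;   compare and the branch; at a peak dispatch rate of
        addi    r8,r8,1         ;   two per clock, CR0 is resolved by the time the branch
        beq     cr0,done        ;   is evaluated, so no speculation is needed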
Integer Unit and System Register Unit
- Integer unit
  - No stalls caused by single-cycle instructions
  - Multi-cycle instructions keep the integer unit busy
  - Possible stalls due to dependencies are minimized by allowing access to operands as soon as the source data is valid
- System register unit
  - Handles access to system registers
  - Assists the integer unit by handling some integer operations
Integer unit
Most integer instructions take only one cycle to execute; thus the integer unit does not usually stall. The integer unit stalls only when it is executing multiple-clock integer instructions such as trap, multiply, and divide, or when an instruction cannot execute because it is dependent on the results of another operation. The internal bus structure of the 603e allows an integer instruction to access any operand as soon as it becomes valid.

System register unit
The SRU handles all of the special-purpose register instructions, context-synchronizing instructions, and certain integer add/compare operations. Some special-purpose register instructions are also inherently context synchronizing. Context synchronization will always cause some instruction stall, but this is almost always critical to guarantee correct operation. Integer operations in the SRU take only one cycle to execute, thereby causing no stalls.
Floating-Point Unit
[Diagram: the fetcher/dispatch unit (six-entry instruction queue) feeds the FPU pipeline of multiplier, adder, normalizer, and exception-monitor stages; results pass through the four 64-bit floating-point rename registers into the 32-entry, 64-bit floating-point register file, and finished instructions are reported to the five-entry completion queue.]
The floating-point unit consists of four stages: multiplier, adder, normalizer, and exception monitor, which are organized conceptually as shown. The multiplier stage is a single-precision multiplier that every FP instruction must pass through. No instruction can enter the FP unit if the multiplier is occupied. Double-precision operations will cause the multiplier to be occupied for two consecutive clocks. The adder always takes a single clock. Typically, instructions will flow through these stages without stalling unless a stall in the normalize or exception stage blocks the instruction pipeline flow. The FP register file supports only a single write-back port from the rename registers.

The normalizer stage can cause delays of up to several clocks; the number of clocks that normalization takes is data-dependent. When the normalizer stalls, it prevents instructions in the multiplier or adder stages from stepping through. To prevent potential speed-path problems, an additional stage exists after the normalization stage. This stage is simply a holding stage, and floating-point instructions use up one clock cycle to pass through it.
FPU Stall Conditions
[Diagram: the FPU pipeline with valid bits in the multiplier, adder, and normalizer stages; when all three are occupied, a disable-dispatch signal is sent back to the fetcher/dispatch unit.]
- Stall conditions
  - Normalization blocking dispatch
  - Late release of FP rename registers
  - Enabling exceptions
As previously mentioned, the normalizer stage can take multiple cycles, thereby stalling the flow of instructions within the FPU. When FP instructions occupy the normalizer, multiplier, and adder stages at the same time, a signal is sent to the dispatcher, halting dispatch of instructions to the floating-point unit. Even if normalization doesn't stall the pipeline, the distance between the normalizer and the dispatcher prevents the FPU from informing the dispatcher to resume dispatching until after it is too late to dispatch an instruction on that clock. This causes a stall after every third consecutive single-cycle FP instruction.

The wait stage that exists after the normalization stage also contributes to potential stalls. The additional wait stage causes FPR rename registers to be released one cycle after the FP operation is complete. This causes a stall if a series of single-cycle FP instructions is executing in the FPU: after every fourth single-cycle FP instruction, a stall will occur due to lack of FPR rename registers. These two stall scenarios cause a series of single-cycle FP instructions to dispatch in clocks 1, 2, 3, 5, 7, 8, 9, 11, 13, 14, 15, and so on. The next slide depicts the stall scenarios described above.

Finally, if exception checking is enabled in the FPU, the instruction may have to wait in the normalizer while exceptions are checked. One can enhance performance by pre-qualifying data prior to running it and polling for possible exceptions at the last reasonable instant.
FPU Code Stalls (clock-by-clock example)
[Table: clocks 0-8 for a series of single-cycle FP instructions A-K, showing the instruction queue, the completion queue, and the allocated FP rename registers in each clock; D = marked for dispatch, F = marked finished. The sequence is walked through in the notes below.]
Series of single-cycle FP instructions

Clock 0: The first two instructions (A and B) are brought into the instruction queue (IQ). A is marked for dispatch.

Clock 1: A is dispatched to the multiplier stage of the FPU and is allocated FP rename register 0. B is marked for dispatch. C and D are brought into the IQ.

Clock 2: A steps to the adder stage in the FPU. B is dispatched to the multiplier stage and is allocated FP rename register 1. C is marked for dispatch. E is brought into the IQ. (Anything after E is ignored for this discussion.)

Clock 3: A steps to the normalizer stage in the FPU. B steps to the adder stage. C is dispatched to the multiplier stage and is allocated FP rename register 2. At this point, a signal is sent to the dispatcher indicating that no instruction may be dispatched to the FPU until a stage has been freed up. This signal is negated as soon as the normalizer stage is finished, but this will be too late to actually permit an instruction to dispatch on the next clock. D, therefore, stalls in the IQ (it is not marked for dispatch).

Clock 4: A steps to a wait stage in the FPU. A signal has been sent to the completion unit indicating that A is finished and, since it is the oldest instruction in the completion queue, it is permitted to retire. B steps to the normalizer stage. C steps to the adder stage. D is given permission to dispatch on the next clock.

(Continued below.)
Clock 5: A is gone from the completion queue, but a delay on FP rename register deallocation prevents FP rename register 0 from being re-allocated. B is finished and permitted to retire. C steps to the normalizer stage. D is dispatched to the FP multiplier stage and is allocated FP rename register 3. At this point, all four FP rename registers are in use, which means E cannot be marked for dispatch this cycle. E stalls in the IQ.

Clock 6: FP rename register 0 is deallocated. B is gone, but its FP rename register deallocation is delayed for one clock. C is finished and permitted to retire. D moves to the adder stage. E is marked for dispatch.

Clock 7: FP rename register 1 is deallocated. C steps to the FP wait stage. D steps to the normalizer stage. E is dispatched to the multiplier stage and is allocated FP rename register 0. At this point, the pattern of stalls repeats.

Again, note the dispatch stall during clock 3. This is caused by all of the multiplier, adder, and normalizer stages being in use. Also note the dispatch stall during clock 5. This is caused by all of the rename registers being tied up (a rename register must be deallocated for one clock before it can be reused).
FPU and Completion Unit

loop:
        lfsu    f22,4(r20)      ; A
        fmadd   f15,f16,f13,f28 ; B
        lfsu    f23,4(r21)      ; C
        fmadd   f18,f19,f14,f29 ; D
        lfsu    f13,4(r20)      ; E
        fmadd   f25,f24,f22,f30 ; F
        lfsu    f14,4(r21)      ; G
        bdnz    loop            ; H

[Table: completion queue contents by cycle - cycle 1: A B C D; cycle 2: B C D E; cycle 3: D E F; cycle 4: F G.]
Completion of floating-point unit instructions is a potential source of stalls. Due to the single write-back port on the floating-point register file, multiple instructions trying to write back floating-point results must do so in a sequential manner. This will typically happen in matrix math, where math operations occur in parallel with loads that initialize registers for subsequent math operations. The code segment above depicts such a scenario. Adjacent load and fmadd instructions have no register dependencies, nor do they require the same execution unit. Therefore, each pair can dispatch together, execute in parallel, and even finish (update rename registers) in parallel. However, due to the single write-back port, this code has an effective throughput of only a single instruction per clock.

If this code is part of a larger code segment that includes integer instructions, then it is possible to achieve a greater instruction throughput by intermixing integer instructions (from elsewhere in the code sequence) with these floating-point instructions. This allows the integer execution and write-back to overlap with the floating-point write-back, thereby improving the overall instruction throughput of the entire code segment.
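One possible interleaving is sketched below; the addi/add instructions are made-up filler standing in for integer work from elsewhere in the code sequence and are not part of the original example:

loop:
        lfsu    f22,4(r20)
        fmadd   f15,f16,f13,f28
        addi    r5,r5,1          ; independent integer work; its execution and write-back
        lfsu    f23,4(r21)       ;   overlap with the floating-point write-backs, which are
        fmadd   f18,f19,f14,f29  ;   limited to one per clock by the single FPR write-back port
        add     r6,r6,r5
        lfsu    f13,4(r20)
        fmadd   f25,f24,f22,f30
        lfsu    f14,4(r21)
        bdnz    loop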
Load/Store Hierarchy
[Diagram: the dispatcher feeds the LSU, which contains a two-element EIB (a reservation station slot and an effective-address calculation slot) and a two-element store queue; the data cache (DC) has one load-miss slot and one store-miss slot; the bus interface unit (BIU) holds one-element queues, including data load and store queues, in front of the external bus.]
The load/store hierarchy within the PowerPC chip consists of the load/store unit (LSU), the data cache (DC), and the bus interface unit (BIU). The LSU stages consist of a two-element EIB, which receives dispatched instructions and calculates effective addresses, and a two-element store queue, which holds stores waiting for the data cache. The data cache stages consist of slots for a load miss and a store miss; only one miss can be handled at a time. The BIU stages consist of a number of one-element queues, such as the data load and store queues. Each queue can hold a separate instruction waiting for access to memory.

Instructions are first dispatched from the instruction queue (IQ) to the LSU EIB, which has two slots: the reservation station slot (LSU RS) and an effective address calculation slot (LSU EA). An instruction is held in the LSU EA slot until its address operand is available. Normally, if the LSU is available for dispatch (see below), the instruction is dispatched directly to the LSU EA slot, provided both slots are empty. If the LSU EA slot is occupied, then the instruction is dispatched to the LSU RS slot.

Once the instruction's effective address has been calculated, its progress through the pipeline depends on whether it is a load or a store. A load then accesses the data cache (DC), as described later. The load's entry in the completion queue (CQ) is marked finished when the data for the load returns. A store passes to the first LSU store queue slot, and its entry in the CQ is marked finished. Thus, a store can be considered finished, and even retired from the completion queue, long before its data is actually written to cache or to memory. On the next clock cycle, the store passes to the second LSU store queue slot and, on the subsequent clock, it is free to access the data cache. Note that because a store must traverse two more slots than a load before accessing the data cache, a load instruction may bypass preceding stores within the LSU. Also, if both a load (in the LSU EA slot) and a store (in the second LSU store queue slot) are free to access the data cache, then the load takes precedence.
Data Cache Miss Stall
[Figure 1: the LSU RS and EA slots, LSU store queue, and data cache load-miss and store-miss slots.
 Figure 2 (store stalls): A: stw r3,0(r4); B: lwz r5,0(r6); C: stw r7,0(r8) - load B occupies the data cache load-miss slot while stores A and C wait in the LSU.
 Figure 3 (load stall): A: stw r3,0(r4); B: stw r5,0(r6); C: stw r7,0(r8); D: lwz r9,0(r10) - store A occupies the data cache store-miss slot while load D waits in the LSU EA slot.]
Although superscalar architectures feature multiple execution paths, resource limitations can prevent full utilization of these paths. Data cache misses are the primary cause of stalls in the LSU. The example above demonstrates how stalls can occur because the 603e data cache can only handle one miss at a time.

When a load or store misses in the data cache, the data cache asserts a busy signal that stalls subsequent instructions in the LSU, as shown in Figure 1. While the data cache is busy, no other instructions can access the data cache, and instructions are blocked from leaving the LSU EA stage. This prevents a store from propagating from the LSU EA stage to the LSU store queue (LSU SQ), even if the store queue is available. For a load miss, the data cache is busy until the data comes back from the BIU. For a store miss, the data cache is busy until the store is able to propagate to the BIU.

Figure 2 demonstrates store stalls. While load B is waiting for its data to come back, store A may not access the data cache, and store C may not propagate to the LSU store queue. Note that load B bypassed store A in the LSU. Figure 3 demonstrates a load stall. Load D may not access the data cache until store A propagates to the BIU. When it does, the data cache is no longer busy, and load D will bypass stores B and C.
Address Alias Stall
[Figures 1-5: snapshots of the LSU RS and EA slots, LSU store queue, and data cache miss slots for loads and stores A, B, and C whose addresses alias; the sequence is described in the notes below.]
To understand the flow within a superscalar architecture, one cannot ignore instruction-specific details. For example, consider Figure 1, in which load C would ordinarily bypass stores A and B. However, if the data address of C can potentially collide (alias) with the data address of A or B, then C will stall in the LSU EA slot until the aliasing store passes out of the LSU store queue.

Address translation may occur after alias checking. Since only the lower 12 bits remain constant through translation, these are the only bits that can be checked. In addition, the addresses are checked with word granularity (four bytes, mask = 0xFFC) if the sizes of both the load and the store are less than or equal to four bytes, or with double-word granularity (eight bytes, mask = 0xFF8) otherwise. For instance, 0x2000 and 0x3003 would alias to each other, but 0x2000 and 0x2020 would not. Note that it is possible to have an alias stall even if the load and store do not actually access the same location, because only the lower 12 bits of the address can be compared.

In a superscalar architecture, other stalls may occur due to timing considerations. For example, if a load which aliases a store has spent only one cycle in the LSU EA stage, then the LSU circuitry is not fast enough to prevent the load from bypassing the store in accessing the data cache. Since this aliased load should not access the cache before the store, the LSU must cancel the load in the subsequent cycle. Figures 2-5 depict this situation. In Figure 2, load B and store A have aliasing addresses. If B has been in the LSU EA stage for more than one cycle (due to some other stall), then there is time to prevent it from accessing the data cache, and on the next cycle A will access the data cache. However, if B has been in the LSU EA slot for only one cycle, the alias check comes too late to prevent the cache access shown in Figure 3. A is stalled and cannot access the cache. In the next cycle (Figure 4), the load is canceled, and in Figure 5 the store propagates to the data cache. Note that in this example, the store also misses in the cache and blocks the load from accessing the data cache on the next cycle.
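The granularity described above can be illustrated with a short fragment; the addresses and registers are made up for this sketch and are not from the original presentation:

        ; assume r4 = 0x2000 and r6 = 0x3000
        stw     r3,0(r4)        ; store to 0x2000
        lbz     r5,3(r6)        ; load from 0x3003: under the word-granularity mask 0xFFC,
                                ;   both accesses compare as 0x000, so the load stalls behind
                                ;   the store even though the full addresses differ
        lwz     r7,0x20(r4)     ; load from 0x2020: the masked value 0x020 differs from 0x000,
                                ;   so this load may bypass the store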
Completion Queue Stall
[Figures 1-2: snapshots of the LSU RS and EA slots, LSU store queue, and data cache miss slots for an instruction A whose completion queue entry is not in the bottom slot; described in the notes below.]
Superscalar architectures frequently signal between different parts of the architecture in order to coordinate the various units, and diagrams may not always show all the signals that are shared between units within the system. These interactions can also cause stalls. Here we discuss a case in which the state of the completion queue can affect instruction flow in the LSU.

Since the 603e allows out-of-order execution, instructions will frequently dispatch to the LSU (as well as to other execution units) before previous instructions have finished executing. If one of these previous instructions generates an exception, then all subsequent instructions (including the LSU instruction) must be canceled from the instruction flow (flushed). Various parts of the processor, including the LSU, must be careful to stall instructions that could be canceled before they permanently change the processor state.

On the 603e, if a load's or store's entry in the completion queue is not in the bottom slot, then there are preceding instructions that could potentially generate exceptions which may cancel the load or store. The instruction must be stalled before it reaches a state from which it cannot be canceled. Figures 1-2 depict this situation, in which instruction A is stalled because its entry in the completion queue is not in the bottom slot. In Figure 1, store A is stalled in the second slot of the LSU store queue, since writing to the data cache would incur too much of a penalty to undo. In Figure 2, load A is stalled in the data cache miss slot if it is accessing guarded memory. Guarded memory is typically used to prevent out-of-order loads to I/O devices, which might otherwise produce undesired results. Note that even if load A were at the bottom of the completion queue, the 603e would stall the load for one cycle before making its request to the BIU.
LSU EA Stall
[Figures 1-4: snapshots of the LSU RS and EA slots, LSU store queue, and data cache miss slots for loads A and B and a subsequent instruction C; described in the notes below.]
The fast timing requirements of superscalar processors sometimes lead to unusual types of stalls. If a load has spent only one cycle in the LSU EA slot before accessing the data cache, then it is removed from the LSU after this access (assuming that the access is not canceled). However, if a load spends more than one cycle in the LSU EA slot, then it will appear to remain in this slot (blocking subsequent LSU instructions) even after the load has accessed the data cache. This block will remain until the data becomes available (and the load is marked finished in the completion queue).

In Figure 1, A flows into the LSU EA slot, and it flows out in Figure 2. This allows C to be dispatched to the LSU. However, B is stalled in the LSU EA slot; when B accesses the data cache in Figure 3, it keeps its entry in the LSU EA slot in Figure 4. This stalls C, and it stalls the dispatch of any subsequent load/store instruction until the data for B returns from the BIU.
Misaligned Address Stall
[Figure 1 (misaligned store): store A in the LSU EA slot is split into aligned stores A1 and A2, which pass into the LSU store queue.
 Figure 2 (misaligned load): load A is split into aligned loads A1 and A2 and keeps its entry in the LSU EA slot.]
Accessing misaligned addresses can result in significant performance penalties on most RISC superscalar microarchitectures. On the 603e, a data address is aligned if it falls on a multiple of the access size. Thus a word (4-byte) access is aligned at 0x0000 and 0x0004 but not at 0x0002; a double-word (8-byte) access is aligned at 0x0000 and 0x0008 but not at 0x0004. If the data address of a load or store in the LSU EA slot is not aligned, then the access is split into two aligned accesses.

Figure 1 shows a misaligned store. A is first split into the aligned store A1; on the next clock it is split into the aligned store A2 and the LSU EA entry is removed. In Figure 2 we have a misaligned load, in which A is split into the aligned loads A1 and A2. Note that because the load stayed in the LSU EA slot for more than one cycle, it remains in this slot until its data comes back (see the LSU EA stall above).
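For example (the base address and registers below are illustrative, not from the original):

        ; assume r4 = 0x1000
        lwz     r5,4(r4)        ; word load from 0x1004: aligned, a single cache access
        lwz     r6,2(r4)        ; word load from 0x1002: misaligned, split into two aligned
                                ;   accesses and held in the LSU EA slot until its data returns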
Instruction Interactions: Rename Register Stall
[Figures 1-5: snapshots of the instruction queue and completion queue for a series of lwzu instructions A-H as GPR rename registers are consumed and released; described in the notes below.]
As with the other execution units, there may also be stalls due to contention for the rename registers. Figures 1-5 show the interaction between the IQ and CQ for a series of lwzu instructions which are fetched into the instruction queue two at a time. Each lwzu uses two general-purpose register (GPR) rename registers, one for the address operand and one for the data operand. The 603e has five GPR rename registers available (and four FPR rename registers). In Figure 3, instruction C cannot dispatch because A and B have already taken four GPR rename registers and there is only one available. Later, when A retires and releases its rename registers (Figure 4), C has the resources it needs to dispatch. It dispatches the following cycle (Figure 5).
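A minimal sketch of the situation (the register numbers are illustrative and not from the original):

        lwzu    r5,4(r10)       ; A: needs renames for r5 (data) and r10 (updated address)
        lwzu    r6,4(r11)       ; B: two more renames; four of the five GPR renames are now in use
        lwzu    r7,4(r12)       ; C: needs two renames but only one is free, so dispatch stalls
                                ;    until A retires and releases its rename registers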
Instruction Interactions: Dependency Stall

Original sequence:                    Reordered sequence:
  A  lwzx  r13,r14,r15                  A  lwzx  r13,r14,r15
  B  add   r26,r27,r13                  B  lis   r20,0xdead
  C  lis   r20,0xdead                   C  stwu  r16,4(r17)
  D  stwu  r16,4(r17)                   D  ori   r20,r20,0xbeef
  E  ori   r20,r20,0xbeef               E  add   r26,r27,r13
  F  cmpw  r26,r20                      F  cmpw  r26,r20

[Table: completion queue contents in each clock cycle (cycles 1-5 for the original sequence, 1-4 for the reordered sequence); the timing is analyzed in the notes below.]
The mix of instructions in an instruction sequence can result in a variety of stalls; dependency stalls are the most common. A dependency occurs when one instruction uses as its source data the results of another instruction. Such a dependency will cause a stall if the two instructions are placed right next to each other. The 603e reduces the impact of most of these situations through the use of rename registers and forwarding of results. However, in some situations, stalls can happen as follows.

Two orderings of a code sequence are shown. In both sequences, the add instruction uses as its source the results of the lwzx load instruction. In the original code, the add occurs right after the lwzx. In the reordered sequence, the add is separated from the lwzx by moving it down three instructions.

Analysis of the original code sequence: assuming the lwzx hits in the data cache, its data will return in 2 clocks. Although both the add and the lwzx can be dispatched to the completion queue in the same clock, the add cannot begin execution until the data from the lwzx returns. Therefore it cannot retire with the lwzx and is stalled by one clock. The lis dispatches to the SRU, executes, and is ready to retire with the add in cycle 3. In cycle 4, the stwu and ori can retire together. Then in cycle 5, the cmpw retires alone. Total time: 5 clocks.

Analysis of the reordered code sequence: the lwzx (cache hit) takes 2 clocks. Since the lis is not dependent on the lwzx, it can retire with the lwzx in clock 2. The stwu and ori can retire together on the next clock (clock 3). Finally, in clock 4, the add and cmpw retire together. Total time: 4 clocks.

Thus, by separating the generation of a result from the subsequent use of that result, we were able to prevent a stall. It is normally good practice to provide this separation; however, in some cases the benefit gained in one place is lost in another.
Summary
- Scheduling code around superscalar microprocessor resource constraints reduces code stall conditions
- Code stalls can occur in the instruction issue/completion control logic
  - Availability of instruction and completion buffers
  - Availability of rename registers
  - Number of register file write ports
- Code stalls can occur within execution units
  - Aliasing between loads and stores
  - Misaligned accesses
- Code stalls can occur due to instruction mixes
  - Dependencies between instructions
By being able to process multiple instructions at the same time, superscalar microprocessors like the 603e enable systems to attain extremely high levels of performance. However, there are many aspects of a superscalar architecture that can cause code stalls in the instruction flow. By being aware of the constraints that cause code stall conditions, one can generate code that will execute with minimal latency on a superscalar processor. This paper has discussed the aspects of the PowerPC 603e that can cause stalls. Although this paper is specific to the 603e, the lessons learned can be applied to most contemporary superscalar microprocessors.

